Adversarial training method with adaptive attack strength
Tong CHEN, Jiwei WEI, Shiyuan HE, Jingkuan SONG, Yang YANG
Journal of Computer Applications    2024, 44 (1): 94-100.   DOI: 10.11772/j.issn.1001-9081.2023060854

The vulnerability of deep neural networks to adversarial attacks has raised significant concerns about the security and reliability of artificial intelligence systems. Adversarial training is an effective approach to enhance adversarial robustness. To address the issue that existing methods adopt fixed adversarial sample generation strategies and thus neglect the importance of the generation phase for adversarial training, an adversarial training method based on adaptive attack strength was proposed. Firstly, the clean sample and the adversarial sample were input into the model to obtain their outputs. Then, the difference between the two outputs was calculated. Finally, the change of this difference relative to the previous moment was measured to automatically adjust the strength of the adversarial sample. Comprehensive experimental results on three benchmark datasets demonstrate that, compared with the baseline method Adversarial Training with Projected Gradient Descent (PGD-AT), the proposed method improves the robust accuracy under AA (AutoAttack) attack by 1.92, 1.50 and 3.35 percentage points on the three datasets respectively, and it outperforms the state-of-the-art defense method Adversarial Training with Learnable Attack Strategy (LAS-AT) in terms of both robustness and natural accuracy. Furthermore, from the perspective of data augmentation, the proposed method effectively addresses the problem of diminishing augmentation effect during adversarial training.
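
A rough, hedged sketch of the idea (not the authors' code): a PyTorch-style adversarial training loop in which the PGD step budget is adapted from the change of the clean/adversarial output gap. The KL-divergence gap measure and the step-count update rule are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps):
    """Standard PGD attack inside an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def train_epoch(model, loader, optimizer, eps=8/255, alpha=2/255, steps=10):
    prev_gap = None
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps, alpha, steps)
        out_clean, out_adv = model(x), model(x_adv)
        # gap between clean and adversarial outputs (KL divergence chosen here as one possible measure)
        gap = F.kl_div(F.log_softmax(out_adv, dim=1),
                       F.softmax(out_clean, dim=1), reduction="batchmean").item()
        if prev_gap is not None:
            # assumption: if the gap stops growing, strengthen the attack; otherwise relax it
            steps = min(steps + 1, 20) if gap <= prev_gap else max(steps - 1, 5)
        prev_gap = gap
        loss = F.cross_entropy(out_adv, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```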

Medical image privacy protection based on thumbnail encryption and distributed storage
Na ZHOU, Ming CHENG, Menglin JIA, Yang YANG
Journal of Computer Applications    2023, 43 (10): 3149-3155.   DOI: 10.11772/j.issn.1001-9081.2022111646

With the popularity of cloud storage services and telemedicine platforms, more and more medical images are uploaded to the cloud. Once uploaded, these images may be leaked to unauthorized third parties, resulting in the disclosure of users' personal privacy. Besides, if medical images are stored on a single server only, they are vulnerable to attacks that may cause the loss of all data. To solve these problems, a medical image privacy protection algorithm based on thumbnail encryption and distributed storage was proposed. Firstly, by encrypting the thumbnail of the original medical image, the relevance of the medical images was properly preserved while the encryption effect was still achieved. Secondly, a double embedding method was adopted when hiding secret information, and data extraction and image recovery were performed separately to achieve Reversible Data Hiding (RDH) in the encrypted image. Finally, a distributed storage method based on a polynomial shared matrix was used to generate n shares of the image and distribute them to n servers. Experimental results show that, by using the encrypted thumbnail as the carrier, the proposed algorithm exceeds traditional security encryption methods in embedding rate. Even if some servers are attacked, the receiver can recover the original image and the private information as long as no fewer than k shares are received. For the privacy protection of medical images, experiments were carried out on attack resistance and image recovery, and the analysis results show that the proposed encryption algorithm has good performance and high security.
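
The distributed storage step can be pictured as a standard (k, n) polynomial threshold sharing of a byte string (for example, an encrypted thumbnail or image); the byte-wise sharing over the prime field GF(257) and the function names below are illustrative assumptions, not the paper's exact construction.

```python
import random

P = 257  # prime slightly larger than the byte alphabet

def make_shares(data: bytes, k: int, n: int):
    """Split data into n shares so that any k of them suffice to recover it."""
    shares = [[] for _ in range(n)]
    for byte in data:
        coeffs = [byte] + [random.randrange(P) for _ in range(k - 1)]  # degree k-1 polynomial
        for x in range(1, n + 1):
            y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            shares[x - 1].append((x, y))
    return shares

def recover(some_shares):
    """Lagrange interpolation at x = 0, byte position by byte position."""
    out = bytearray()
    for points in zip(*some_shares):
        secret = 0
        for j, (xj, yj) in enumerate(points):
            num = den = 1
            for m, (xm, _) in enumerate(points):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            secret = (secret + yj * num * pow(den, P - 2, P)) % P
        out.append(secret)
    return bytes(out)

shares = make_shares(b"encrypted image bytes", k=3, n=5)
assert recover(shares[:3]) == b"encrypted image bytes"
```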

Reversible data hiding in encrypted image based on multi-objective optimization
Xiangyu ZHANG, Yang YANG, Guohui FENG, Chuan QIN
Journal of Computer Applications    2022, 42 (6): 1716-1723.   DOI: 10.11772/j.issn.1001-9081.2021061495

Focusing on the issues that the Reserving Room Before Encryption (RRBE) embedding algorithm requires a series of pre-processing work and the Vacating Room After Encryption (VRAE) embedding algorithm provides less embedding space, an algorithm of reversible data hiding in encrypted images based on multi-objective optimization was proposed to improve the embedding rate while reducing the algorithm workflow and workload. In this algorithm, two representative algorithms of RRBE and VRAE were combined and applied to the same carrier, and performance indicators such as the amount of embedded information, the distortion of the directly decrypted image, the extraction error rate and the computational complexity were formulated as optimization sub-objectives. Then, the efficiency coefficient method was used to establish a model to solve for the relatively optimal ratio in which the two algorithms are applied. Experimental results show that the proposed algorithm reduces the computational complexity of using the RRBE algorithm alone, enables image processing users to flexibly allocate optimization objectives according to different needs in actual application scenarios, and at the same time obtains better image quality and a satisfactory amount of embedded information.

Degree centrality based method for cognitive feature selection
ZHANG Xiaofei, YANG Yang, HUANG Jiajin, ZHONG Ning
Journal of Computer Applications    2021, 41 (9): 2767-2772.   DOI: 10.11772/j.issn.1001-9081.2020111794
To address the uncertainty of cognitive feature selection in brain atlases, a Degree Centrality based Cognitive Feature Selection Method (DC-CFSM) was proposed. First, the Functional Brain Network (FBN) of the subjects performing the cognitive experiment tasks was constructed based on the brain atlas, and the Degree Centrality (DC) of each Region Of Interest (ROI) in the FBN was calculated. Next, the significance of the differences of the same cortical ROI between different cognitive states during task execution was statistically compared and ranked. Finally, the Human Brain Cognitive Architecture-Area Under Curve (HBCA-AUC) values were calculated for the ranked ROIs, and the performance of several cognitive feature selection methods was evaluated. In experiments on functional Magnetic Resonance Imaging (fMRI) data of mental arithmetic cognitive tasks, the HBCA-AUC values obtained by DC-CFSM on the Task Positive System (TPS), Task Negative System (TNS), and Task Support System (TSS) of the human brain cognitive architecture were 0.669 2, 0.304 0 and 0.468 5 respectively. Compared with Extremely randomized Trees (Extra Trees), Adaptive Boosting (AdaBoost), random forest, and eXtreme Gradient Boosting (XGB), the recognition rate of DC-CFSM for TPS was increased by 22.17%, 13.90%, 24.32% and 37.19% respectively, while its misrecognition rate for TNS was reduced by 20.46%, 29.70%, 44.96% and 33.39% respectively. DC-CFSM can better reflect the categories and functions of the human brain cognitive system when selecting cognitive features from brain atlases.
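
A minimal sketch of the degree centrality step, assuming each subject and cognitive state provides an ROI time-series matrix and the FBN is obtained by thresholding the ROI-to-ROI correlation matrix; the threshold value, the two-state contrast, and the ranking rule are illustrative, not the paper's exact pipeline.

```python
import numpy as np
import networkx as nx

def degree_centrality_per_roi(timeseries, threshold=0.3):
    """timeseries: array of shape (n_timepoints, n_rois) for one subject and cognitive state."""
    corr = np.corrcoef(timeseries.T)              # ROI-to-ROI functional connectivity
    adj = (np.abs(corr) > threshold).astype(int)  # threshold into a binary FBN
    np.fill_diagonal(adj, 0)
    graph = nx.from_numpy_array(adj)
    dc = nx.degree_centrality(graph)              # normalized degree per ROI
    return np.array([dc[i] for i in range(adj.shape[0])])

# Rank ROIs by how strongly their centrality differs between two cognitive states
# (random data stands in for fMRI time series here).
rng = np.random.default_rng(0)
dc_task = degree_centrality_per_roi(rng.standard_normal((200, 90)))
dc_rest = degree_centrality_per_roi(rng.standard_normal((200, 90)))
ranking = np.argsort(-np.abs(dc_task - dc_rest))  # candidate cognitive features, best first
```
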
Loop-level speculative parallelism analysis of kernel program in TACLeBench
MENG Huiling, WANG Yaobin, LI Ling, YANG Yang, WANG Xinyi, LIU Zhiqin
Journal of Computer Applications    2021, 41 (9): 2652-2657.   DOI: 10.11772/j.issn.1001-9081.2020111792
Thread-Level Speculation (TLS) technology can tap the parallel execution potential of programs and improve the utilization of multi-core resources. However, the current TACLeBench kernel benchmarks have not been effectively analyzed for TLS parallelization. To address this problem, a loop-level speculative execution analysis scheme and an analysis tool were designed. With 7 representative TACLeBench kernel benchmarks selected, firstly, the programs were initialized and analyzed, and the hot fragments of the programs were selected for inserting loop identifiers. Then, these fragments were cross-compiled, the data related to speculative threads and memory addresses were recorded, and the maximum potential of loop-level parallelism was analyzed. Finally, the runtime characteristics of the programs (thread granularity, parallelizable coverage, dependency characteristics) and the impact of the source code on the speedup were discussed comprehensively. Experimental results show that: 1) these programs are suitable for TLS acceleration; compared with serial execution, under loop-level speculative execution, the speedups of most programs are above 2, with the highest reaching 20.79; 2) by using TLS to accelerate the TACLeBench kernel programs, most applications can effectively utilize 4-core to 16-core computing resources.
Location based service location privacy protection method based on location security in augmented reality
YANG Yang, WANG Ruchuan
Journal of Computer Applications    2020, 40 (5): 1364-1368.   DOI: 10.11772/j.issn.1001-9081.2019111982

The rapid development of Location Based Service (LBS) and Augmented Reality (AR) technology brings the hidden danger of user location privacy leakage. After analyzing the advantages and disadvantages of existing location privacy protection methods, a location privacy protection method based on location security was proposed. The zone security degree and the camouflage region were introduced into the method, and the zone security degree was defined as a metric indicating whether a zone needs protection: it was set to 1 for insecure zones (zones that need to be protected) and to 0 for secure zones (zones that do not need protection). The location security degree was then calculated by combining the zone security degree with recognition levels. Experimental results show that, compared with the method without location security, this method reduces the average location error and enhances the average security, thereby effectively protecting user location privacy and improving the service quality of LBS.
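
One plausible reading of the metric, sketched below for illustration only: each zone in the camouflage region carries a zone security degree (1 = needs protection, 0 = safe) and a recognition level, and the location security degree of the region combines the two; the weighting used here is an assumption, not the paper's formula.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    security_degree: int      # 1 for an insecure zone (needs protection), 0 for a secure zone
    recognition_level: float  # how easily the user is recognized in this zone, in [0, 1]

def location_security_degree(zones):
    """Score a candidate camouflage region; higher means safer (assumed combination rule)."""
    if not zones:
        return 0.0
    risk = sum(z.security_degree * z.recognition_level for z in zones) / len(zones)
    return 1.0 - risk

region = [Zone(1, 0.9), Zone(0, 0.2), Zone(0, 0.1), Zone(1, 0.4)]
print(f"location security degree: {location_security_degree(region):.2f}")
```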

Greedy core acceleration dynamic programming algorithm for solving discounted {0-1} knapsack problem
SHI Wenxu, YANG Yang, BAO Shengli
Journal of Computer Applications    2019, 39 (7): 1912-1917.   DOI: 10.11772/j.issn.1001-9081.2018112393

Since the existing dynamic programming algorithm cannot quickly solve the Discounted {0-1} Knapsack Problem (D{0-1}KP), based on the idea of dynamic programming and combined with the New Greedy Repair Optimization Algorithm (NGROA) and the core algorithm, a Greedy Core Acceleration Dynamic Programming (GCADP) algorithm was proposed, which accelerates the solution by reducing the problem scale. Firstly, an incomplete item was obtained based on the greedy solution of the problem given by NGROA. Then, the radius and range of the fuzzy core interval were obtained by calculation. Finally, the Basic Dynamic Programming (BDP) algorithm was used to solve for the items in the fuzzy core interval and the items in the same item set. Experimental results show that the GCADP algorithm is suitable for solving D{0-1}KP, and its average solution speed is improved by 76.24% and 75.07% respectively compared with the BDP algorithm and FirEGA (First Elitist reservation strategy Genetic Algorithm).
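
For orientation, a sketch of the Basic Dynamic Programming (BDP) step that GCADP builds on is given below; the greedy repair (NGROA) and the fuzzy-core reduction are omitted, and the group encoding (item A, item B, discounted bundle A+B, at most one choice per group) is a simplified illustration.

```python
def d01kp_dp(groups, capacity):
    """groups: list of ((wA, vA), (wB, vB), (wAB, vAB)); returns the best total value."""
    dp = [0] * (capacity + 1)
    for options in groups:
        new_dp = dp[:]                            # the "take nothing" choice keeps the previous row
        for w, v in options:                      # item A, item B, or the discounted bundle
            for c in range(capacity, w - 1, -1):
                if dp[c - w] + v > new_dp[c]:
                    new_dp[c] = dp[c - w] + v
        dp = new_dp
    return dp[capacity]

groups = [((3, 4), (4, 5), (6, 9)),               # the bundle weighs 6 < 3 + 4, same total value
          ((2, 3), (5, 6), (6, 9))]
print(d01kp_dp(groups, capacity=8))               # 12: bundle of group 1 plus item A of group 2
```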

Robust multi-manifold discriminant local graph embedding based on maximum margin criterion
YANG Yang, WANG Zhengqun, XU Chunlin, YAN Chen, JU Ling
Journal of Computer Applications    2019, 39 (5): 1453-1458.   DOI: 10.11772/j.issn.1001-9081.2018102113
Most existing multi-manifold face recognition algorithms directly process the original data containing noise, but noisy data often have a negative impact on the accuracy of the algorithms. To solve this problem, a Robust Multi-Manifold Discriminant Local Graph Embedding algorithm based on the Maximum Margin Criterion (RMMDLGE/MMC) was proposed. Firstly, a denoising projection was introduced to iteratively reduce the noise of the original data and extract purer data. Secondly, the data images were divided into blocks and a multi-manifold model was established. Thirdly, combined with the idea of the maximum margin criterion, an optimal projection matrix was sought that maximizes the sample distances across different manifolds while minimizing the sample distances within the same manifold. Finally, the distance from the test sample manifold to the training sample manifolds was calculated for classification and recognition. Experimental results show that, compared with the well-performing Multi-Manifold Local Graph Embedding algorithm based on the Maximum Margin Criterion (MLGE/MMC), the classification recognition rate of the proposed algorithm is improved by 1.04, 1.28 and 2.13 percentage points respectively on the noisy ORL, Yale and FERET databases, and the classification effect is obviously improved.
New simplified model of discounted {0-1} knapsack problem and solution by genetic algorithm
YANG Yang, PAN Dazhi, LIU Yi, TAN Dailun
Journal of Computer Applications    2019, 39 (3): 656-662.   DOI: 10.11772/j.issn.1001-9081.2018071580
The current Discounted {0-1} Knapsack Problem (D{0-1}KP) model treats the discounted relationship as a new item, so a repair method must be adopted during solving to repair the individual coding, which leaves the model with few applicable solving methods. To address this limitation, by changing the binary coding expression in the model, an expression that keeps the discounted relationship out of the individual code was proposed. The discounted relationship was established if and only if the encoding values of all involved individual items were one (that is, their product was one). According to this setting, a Simplified Discounted {0-1} Knapsack Problem (SD{0-1}KP) model was established. Then, an improved genetic algorithm, FG (First Genetic algorithm), was proposed for the SD{0-1}KP model based on the Elitist reservation strategy (EGA) and the GREedy strategy (GRE). Finally, combined with the penalty function method, a high-precision penalty-function-based algorithm, SG (Second Genetic algorithm), was proposed for SD{0-1}KP. The results show that the SD{0-1}KP model can fully cover the problem domain of D{0-1}KP. Compared with FirEGA (First Elitist reservation strategy Genetic Algorithm), the two proposed algorithms have obvious advantages in solving speed, and the SG algorithm introduces the penalty function method for the first time, which enriches the solving methods of the problem.
Multi-attribute decision making method based on Pythagorean fuzzy Frank operator
PENG Dinghong, YANG Yang
Journal of Computer Applications    2019, 39 (2): 316-322.   DOI: 10.11772/j.issn.1001-9081.2018061195
To solve multi-attribute decision making problems in the Pythagorean fuzzy environment, a multi-attribute decision making method based on the Pythagorean fuzzy Frank operator was proposed. Firstly, the Pythagorean fuzzy number and the Frank operator were combined to obtain the operational rules based on the Frank operator. Then the Pythagorean fuzzy Frank operators were proposed, including the Pythagorean fuzzy Frank weighted average operator and the Pythagorean fuzzy Frank weighted geometric operator, and the properties of these operators were discussed. Finally, a multi-attribute decision making method based on the Pythagorean fuzzy Frank operators was proposed and applied to an example of green supplier selection. The example analysis shows that the proposed method can solve actual multi-attribute decision making problems and can be further applied to areas such as risk management and artificial intelligence.
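
For reference, the sketch below shows the Frank t-norm/t-conorm and the Pythagorean fuzzy addition commonly built on them in the literature; the paper's exact operational laws may differ in detail, so this is an assumption-laden illustration rather than a reproduction.

```python
import math

def frank_t_norm(a, b, lam=2.0):
    """Frank t-norm with parameter lam > 0, lam != 1."""
    return math.log(1 + (lam ** a - 1) * (lam ** b - 1) / (lam - 1), lam)

def frank_t_conorm(a, b, lam=2.0):
    return 1 - frank_t_norm(1 - a, 1 - b, lam)

def pfn_add(p, q, lam=2.0):
    """Frank-based sum of Pythagorean fuzzy numbers p = (mu_p, nu_p), q = (mu_q, nu_q)."""
    mu = math.sqrt(frank_t_conorm(p[0] ** 2, q[0] ** 2, lam))  # membership degrees via the t-conorm
    nu = math.sqrt(frank_t_norm(p[1] ** 2, q[1] ** 2, lam))    # non-membership degrees via the t-norm
    return mu, nu

print(pfn_add((0.8, 0.4), (0.6, 0.5)))  # result still satisfies mu**2 + nu**2 <= 1
```
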
Siamese detection network based real-time video tracking algorithm
DENG Yang, XIE Ning, YANG Yang
Journal of Computer Applications    2019, 39 (12): 3440-3444.   DOI: 10.11772/j.issn.1001-9081.2019081427
Currently, in the field of video tracking, typical Siamese network based algorithms only locate the center point of the target, which results in poor localization of fast-deforming objects. Therefore, a real-time video tracking algorithm based on a Siamese detection network, called Siamese-FC Region-convolutional neural network (SiamRFC), was proposed. SiamRFC can directly predict the center position of the target and thus cope with rapid deformation. Firstly, the position of the center point of the target was obtained by judging similarity. Then, the idea of object detection was used to return the optimal position by selecting from a series of candidate boxes. Experimental results show that SiamRFC performs well on the VOT2015, VOT2016 and VOT2017 test sets.
Reversible data hiding method based on texture partition for medical images
CAI Xue, YANG Yang, XIAO Xingxing
Journal of Computer Applications    2018, 38 (8): 2293-2300.   DOI: 10.11772/j.issn.1001-9081.2017122885
To solve the problem that the contrast enhancement effect is affected by the embedding rate in most existing Reversible Data Hiding (RDH) algorithms, a new RDH method based on texture partition for medical images was proposed. Firstly, the contrast of the image was stretched to enhance it, and then, according to the texture characteristics of medical images, the medical image was divided into high and low texture levels. The key region of a medical image mainly has a high texture level. To further enhance the contrast of the high texture level while guaranteeing the information embedding capacity, different embedding processes were adopted for the high and low texture levels. To compare the contrast enhancement effect of the proposed method with other RDH algorithms for medical images, No-Reference Contrast-Distorted Image Quality Assessment (NR-CDIQA) was adopted as the evaluation standard. Experimental results show that the marked images processed by the proposed method achieve better NR-CDIQA values and contrast enhancement at different embedding rates.
Six-legged robot path planning algorithm for unknown map
YANG Yang, TONG Dongbing, CHEN Qiaoyu
Journal of Computer Applications    2018, 38 (6): 1809-1813.   DOI: 10.11772/j.issn.1001-9081.2017112671
The global map cannot be accurately known in the path planning of mobile robots. To solve this problem, a local path planning algorithm based on fuzzy rules and the artificial potential field method was proposed. Firstly, a ranging group and fuzzy rules were used to classify the shapes of obstacles and construct local maps. Secondly, a modified repulsive force function was introduced into the artificial potential field method, and local path planning was performed on the local maps using the artificial potential field method. Finally, with the movement of the robot, time breakpoints were set to reduce path oscillation. For maps with random obstacles and with bumpy obstacles, the traditional artificial potential field method and the improved artificial potential field method were simulated respectively. The experimental results show that, in the case of random obstacles, the improved artificial potential field method significantly reduces collisions with obstacles compared with the traditional method; in the case of bumpy obstacles, the improved method successfully completes the path planning goal. The proposed algorithm adapts to terrain changes and can realize the path planning of a six-legged robot under unknown maps.
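
The potential-field step can be pictured with the textbook attractive/repulsive formulation sketched below; the fuzzy obstacle classification, the paper's modified repulsive function, and the time-breakpoint mechanism are not reproduced, and all gains are illustrative.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
    """One gradient step of a classical artificial potential field."""
    force = k_att * (goal - pos)                          # attraction toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                                 # repulsion only inside the influence radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d ** 3 * diff
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 5.2]), np.array([7.0, 6.5])]
for _ in range(400):
    pos = apf_step(pos, goal, obstacles)
print(pos)  # ends up near the goal while steering around the obstacles
```
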
Data preprocessing based recovery model in wireless meteorological sensor network
WANG Jun, YANG Yang, CHENG Yong
Journal of Computer Applications    2016, 36 (10): 2647-2652.   DOI: 10.11772/j.issn.1001-9081.2016.10.2647
To solve the problem of excessive communication energy consumption caused by the large number of sensor nodes and highly redundant sensor data in wireless meteorological sensor networks, a Data Preprocessing Model based on Joint Sparsity (DPMJS) was proposed. By combining the meteorological forecast value with the value of every cluster head in the Wireless Sensor Network (WSN), DPMJS computed a common portion used to preprocess the sensor data. A data collection framework based on distributed compressed sensing was also applied to reduce data transmission and balance energy consumption in the clustered network; data measured at common nodes was recovered at the sink node, so as to reduce data communication radically. A suitable method to sparsify abnormal data was also designed. In simulations, DPMJS enhances data sparsity by exploiting spatio-temporal correlation efficiently and improves the data recovery rate by 25%; compared with compressed sensing, the data recovery rate is improved by 46%; meanwhile, abnormal data can be successfully recovered with a high probability of 96%. Experimental results indicate that the proposed data preprocessing model can increase the efficiency of data recovery, significantly reduce the amount of transmission, and prolong the network lifetime.
Frequency offset tracking and estimation algorithm in orthogonal frequency division multiplexing based on improved strong tracking unscented Kalman filter
YANG Zhaoyang, YANG Xiaopeng, LI Teng, YAO Kun, ZHANG Hengyang
Journal of Computer Applications    2014, 34 (8): 2248-2251.   DOI: 10.11772/j.issn.1001-9081.2014.08.2248

To address the large frequency offset caused by the Doppler effect in high-speed mobile environments, a dynamic state space model of Orthogonal Frequency Division Multiplexing (OFDM) was built, and a frequency offset tracking and estimation algorithm for OFDM based on an improved Strong Tracking Unscented Kalman Filter (STUKF) was proposed. By combining strong tracking filter theory with the UKF, a fading factor was introduced into the calculation of the measurement prediction covariance and the cross covariance, so that the frequency offset estimation error covariance was adjusted, the process noise covariance was controlled, and the gain matrix was adjusted in real time; thus the tracking ability for time-varying frequency offset was enhanced and the estimation accuracy was improved. Simulations were carried out under both time-invariant and time-varying frequency offset models. The simulation results show that the proposed algorithm has better tracking and estimation performance than the UKF frequency offset estimation algorithm, with a Signal-to-Noise Ratio (SNR) gain of about 1 dB at the same Bit Error Rate (BER).

Novel blind frequency offset estimation algorithm in orthogonal frequency division multiplexing system based on particle swarm optimization
YANG Zhaoyang, YANG Xiaopeng, LI Teng, YAO Kun, NI
Journal of Computer Applications    2014, 34 (10): 2787-2790.   DOI: 10.11772/j.issn.1001-9081.2014.10.2787

To estimate the frequency offset in Orthogonal Frequency Division Multiplexing (OFDM) systems, a novel blind frequency offset estimation algorithm based on Particle Swarm Optimization (PSO) was proposed. Firstly, the mathematical model and the cost function were designed according to the principle of minimizing the reconstruction error between the reconstructed signal and the actually received signal. The powerful random, parallel, global search capability of PSO was then used to minimize the cost function and obtain the frequency offset estimate. Two inertia weight strategies for the PSO algorithm, a constant coefficient and a differentially descending weight, were simulated and compared with the minimum output variance and golden section methods. The simulation results show that the proposed algorithm achieves high accuracy, about one order of magnitude better than similar algorithms at the same Signal-to-Noise Ratio (SNR), and it is not restricted by the modulation type and has a frequency estimation range of (-0.5, 0.5).
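
A generic PSO minimizer of the kind the abstract describes is sketched below; the quadratic cost is a synthetic stand-in for the paper's signal-reconstruction error, and the swarm parameters (inertia weight, acceleration coefficients) are illustrative.

```python
import numpy as np

def pso_minimize(cost, lo=-0.5, hi=0.5, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize cost over [lo, hi] with a basic constant-inertia particle swarm."""
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, n_particles)                  # candidate normalized frequency offsets
    v = np.zeros(n_particles)
    pbest, pbest_cost = x.copy(), np.array([cost(xi) for xi in x])
    gbest = pbest[pbest_cost.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(xi) for xi in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()]
    return float(gbest)

true_offset = 0.12
print(pso_minimize(lambda f: (f - true_offset) ** 2))     # placeholder for the reconstruction-error cost
```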

Hadoop-based storage architecture for mass MP3 files
ZHAO Xiao-yong, YANG Yang, SUN Li-li, CHEN Yu
Journal of Computer Applications    2012, 32 (06): 1724-1726.   DOI: 10.3724/SP.J.1087.2012.01724
As the de facto standard for digital music, MP3 involves a huge number of files and rapidly growing user access demands. How to effectively store and manage massive MP3 files while providing a good user experience has become a major concern. The emergence of Hadoop provides new ideas. However, because Hadoop itself is not suitable for handling massive small files, a Hadoop-based storage architecture for massive MP3 files was presented that makes full use of the rich metadata of MP3 files. A classification algorithm in the pre-processing module merged small files into sequence files, and an efficient indexing mechanism was introduced, which serves as a good solution to the small file problem. The experimental results show that the approach can achieve better performance.
Collaborative filtering and recommendation algorithm based on matrix factorization and user nearest neighbor model
YANG Yang, XIANG Yang, XIONG Lei
Journal of Computer Applications    2012, 32 (02): 395-398.   DOI: 10.3724/SP.J.1087.2012.00395
Concerning the difficulty of data sparsity and new user problems in many collaborative recommendation algorithms, a new collaborative recommendation algorithm based on matrix factorization and user nearest neighbors was proposed. To guarantee the prediction accuracy for new users, a user nearest neighbor model based on user data and profile information was used. Meanwhile, since large data sets and matrix sparsity significantly increase the time and space complexity, matrix factorization was introduced to alleviate the effect of the data problems and improve prediction accuracy. The experimental results show that the new algorithm can improve the recommendation accuracy effectively and alleviate the problems of data sparsity and new users.
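
A minimal sketch of the two ingredients named in the abstract, an SGD-trained matrix factorization plus a profile-based nearest-neighbor fallback for cold-start users; it is a simplified illustration under assumed data structures, not the paper's model.

```python
import numpy as np

def train_mf(ratings, n_factors=10, n_epochs=30, lr=0.01, reg=0.05):
    """ratings: list of (user, item, value) triples over the known entries."""
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    rng = np.random.default_rng(0)
    P = rng.normal(0, 0.1, (n_users, n_factors))          # user latent factors
    Q = rng.normal(0, 0.1, (n_items, n_factors))          # item latent factors
    for _ in range(n_epochs):
        for u, i, r in ratings:
            pu = P[u].copy()
            err = r - pu @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

def predict(u, i, P, Q, profiles, ratings_by_user, k=3):
    """Factor-model prediction for known users; profile-similarity vote for new users."""
    if ratings_by_user.get(u):
        return float(P[u] @ Q[i])
    # cold start: average the item's ratings from the k users with the most similar profiles
    neighbours = sorted(ratings_by_user,
                        key=lambda v: -float(np.dot(profiles[u], profiles[v])))[:k]
    votes = [r for v in neighbours for (ii, r) in ratings_by_user[v] if ii == i]
    return float(np.mean(votes)) if votes else 3.0        # assumed global fallback rating
```
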
Application of fuzzy decision trees to the public critical system
Yang Yang
Journal of Computer Applications   
Nowadays the data in real police databases of the public critical system is growing explosively and is hard to classify. To solve this problem, a revised fuzzy decision tree algorithm combined with the genetic algorithm was proposed. The prediction rate of the decision trees and the comprehensibility of the rules were improved by this method. Meanwhile, a decision tree classifier based on the algorithm was designed to help the police not only classify past items but also forecast new critical events accurately and quickly.
Off-line handwritten amount in words recognition using classifiers combination method based on HMM
WANG Xian-mei, YANG Yang, WANG Hong
Journal of Computer Applications    2005, 25 (12): 2925-2927.  
A new combination scheme was proposed. First, three individual classifiers were constructed with normalized wavelet features, stroke density and the percentage of black pixels. Then, combination based on different methods was studied. The experimental results show that the proposed approach achieves high performance.
Improved Dezert-Smarandache theory and its application in target recognition
MIAO Zhuang, CHENG Yong-mei, LIANG Yan, PAN Quan, YANG Yang
Journal of Computer Applications    2005, 25 (09): 2044-2046.   DOI: 10.3724/SP.J.1087.2005.02044
The Dezert-Smarandache Theory (DSmT) is more suitable than the D-S theory for handling conflicting evidence. However, in many cases the mass function of the main focal element has difficulty converging when DSmT is applied. New mass values were reconstructed to solve this problem, and an improved DSmT was proposed so that the mass value of the main focal element can converge quickly. Simulation results of target recognition based on 2D sequence images of airplanes demonstrate that the revised mass value of the main focal element converges better to the desired threshold, and consequently the task of target recognition is accomplished more precisely.
Skew angle detection and correction of document images based on Hough transform
LI Zheng, YANG Yang, XIE Bin, WANG Hong
Journal of Computer Applications    2005, 25 (03): 583-585.   DOI: 10.3724/SP.J.1087.2005.0583

Scanned document images may be skewed to some degree. Severe image skew makes image segmentation difficult and lowers character recognition accuracy. A new approach for skew detection based on the Hough transform was presented. To overcome the heavy computing burden of the Hough transform, the method first selected a representative sub-region and extracted the horizontal edges from the image, and then performed a two-stage Hough transform on the extracted edges. Experimental results show that it corrects skewed document images more rapidly and accurately than the general Hough method and the cross-correlation method.
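
A compact illustration of Hough-style skew estimation over text pixels is sketched below, with the paper's sub-region selection and two-stage transform simplified to a coarse-then-fine angle search; the bin count and angle ranges are assumptions.

```python
import numpy as np

def skew_angle(binary_img, span=15.0, coarse_step=1.0, fine_step=0.1):
    """binary_img: 2D array with text/edge pixels set to 1. Returns the estimated skew in degrees."""
    ys, xs = np.nonzero(binary_img)

    def score(a):
        # project pixels onto the normal of lines tilted by angle a; text lines make this peaked
        rho = ys * np.cos(np.radians(a)) - xs * np.sin(np.radians(a))
        hist, _ = np.histogram(rho, bins=200)
        return float((hist.astype(float) ** 2).sum())

    def best(angles):
        return angles[int(np.argmax([score(a) for a in angles]))]

    coarse = best(np.arange(-span, span + coarse_step, coarse_step))              # coarse pass
    return float(best(np.arange(coarse - 1, coarse + 1 + fine_step, fine_step)))  # fine pass
```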
